
    Methods in Psychological Research

    Psychologists collect empirical data with various methods for different reasons. These diverse methods have their strengths as well as weaknesses. Nonetheless, it is possible to rank them in terms of different criteria. For example, the experimental method yields the least ambiguous conclusions; hence, it is best suited to corroborating conceptual, explanatory hypotheses. The interview method, on the other hand, gives research participants a kind of empathic experience that may be important to them; it is for this reason the best method to use in a clinical setting. All non-experimental methods owe their origin to the interview method. Quasi-experiments are suited to answering practical questions when ecological validity is important.

    Experimentation in Psychology--Rationale, Concepts and Issues

    An experiment is made up of two or more data-collection conditions that are identical in all aspects but one. It owes its design to an inductive principle and its hypothesis to deductive logic. It is best suited for corroborating explanatory theories, ascertaining functional relationships, or assessing the substantive effectiveness of a manipulation. Also discussed are (a) the three meanings of 'control,' (b) the issue of ecological validity, (c) the distinction between theory-corroboration and agricultural-model experiments, and (d) the distinction among the hypotheses at four levels of abstraction that are implicit in an experiment.

    Some meta-theoretical issues relating to statistical inference

    This paper is a reply to some comments made by Green (2002) on Chow’s (2002) critique of Wilkinson and the Task Force's (1999) report on statistical inference. Issues raised are (a) the inappropriateness of accepting methodological prescriptions on authority, (b) the vacuity of non-falsifiable theories, (c) the need to distinguish between experiment and meta-experiment, and (d) the probability foundation of the null-hypothesis significance-test procedure (NHSTP). This reply is intended to foster a better understanding of research methods in general, and of the role of NHSTP in empirical research in particular.
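
    Because the probability foundation of NHSTP is central to the exchange summarized above, a minimal worked example may help: under NHSTP, the p-value is the probability, computed on the assumption that the null hypothesis H0 is true, of obtaining a test statistic at least as extreme as the one observed. The illustrative data values and the use of SciPy's two-sample t-test are assumptions of this sketch, not material from the paper.

```python
# Minimal sketch of NHSTP with a two-sample t-test; the data below are made up.
from scipy import stats

control = [12.1, 11.8, 12.4, 11.9, 12.2, 12.0]    # scores under the control condition
treatment = [12.9, 13.1, 12.7, 13.4, 12.8, 13.0]  # scores under the experimental condition

# H0: the two conditions do not differ.  The p-value is P(t at least this extreme | H0).
t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05  # conventional significance level
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, reject H0 at alpha={alpha}: {p_value < alpha}")
# Note: p is a statement about the data given H0, not the probability that H0 is true.
```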

    Automatic detection, consistent mapping, and training

    Results from two experiments showed that a flat display-size function was found under the consistent mapping (CM) condition despite the facts that there was no extensive CM training and that the stimulus-response (S-R) consistency was only an intrasession manipulation. A confounding factor might be responsible for the fact that the consistent and the varied S-R mapping conditions gave rise to different display-size functions in Schneider and Shiffrin's (1977) study. Their claim that automatic detection and controlled search are qualitatively different is also discussed.

    Iconic memory or icon?

    The objectives of the present commentary are to show that (1) one important theoretical property of iconic memory is inconsistent with a retinotopic icon, (2) data that are difficult for the notion of an icon do not necessarily challenge the notion of an iconic store, (3) the iconic store, as a theoretical mechanism, is an ecologically valid one, and (4) the rationale of experimentation is such that the experimental task need not mimic the phenomenon being studied.

    Issues in Statistical Inference

    The APA Task Force’s treatment of research methods is critically examined. The present defense of the experiment rests on showing that (a) the control group cannot be replaced by the contrast group, (b) experimental psychologists have valid reasons to use non-randomly selected subjects, (c) there is no evidential support for the experimenter expectancy effect, (d) the Task Force had misrepresented the role of inductive and deductive logic, and (e) the validity of experimental data does not require appealing to the effect size or statistical power.

    Cognitive Science and Psychology

    The protocol algorithm abstracted from a human cognizer's own narrative in the course of doing a cognitive task is an explanation of the corresponding mental activity in Pylyshyn's (1984) virtual machine model of mind. Strong equivalence between an analytic algorithm and the protocol algorithm is an index of the validity of the explanatory model. Cognitive psychologists may not find the strong-equivalence index useful as a means to ensure that a theory is not circular because (a) research data are also used as foundation data, (b) there is no justification for the relationship between a to-be-validated theory and its criterion of validity, and (c) foundation data, the validation criterion, and the to-be-validated theory are not independent in cognitive science. There is also the difficulty of not knowing what psychological primitives are.

    Iconic memory, location information, and partial report.


    Inter-laboratory proficiency testing scheme for tumour next-generation sequencing in Ontario: A pilot study

    Background: A pilot inter-laboratory proficiency scheme for 5 Ontario clinical laboratories testing tumour samples for the Ontario-wide Cancer Targeted Nucleic Acid Evaluation (OCTANE) study was undertaken to assess proficiency in the identification and reporting of next-generation sequencing (NGS) test results in solid tumour testing from archival formalin-fixed, paraffin-embedded (FFPE) tissue. Methods: One laboratory served as the reference centre and provided samples to the 4 participating laboratories. An analyte-based approach was applied: each participating laboratory received 10 FFPE tissue specimens profiled at the reference centre, with tumour site and histology provided. Laboratories performed testing per their standard NGS tumour test protocols. Items returned for assessment included genes and variants that would typically be reported in routine clinical testing, as well as variant call format (VCF) files to allow for assessment of NGS technical quality. Results: Two main aspects were assessed: (a) the technical quality and accuracy of the identification of exonic variants, and (b) site-specific reporting practices. Technical assessment included evaluation of exonic variant identification and quality assessment of the VCF files to evaluate base calling, variant allele frequency, and depth of coverage for all exonic variants. Concordance at 100% was observed from all sites in the technical identification of 98 exonic variants across the 10 cases. Variability between laboratories in the choice of variants considered clinically reportable was significant: of the 38 variants reported as clinically relevant by at least 1 site, only 3 were concordantly reported by all participating centres as clinically relevant. Conclusions: Although excellent technical concordance for NGS tumour profiling was observed across participating institutions, differences in the reporting of clinically relevant variants were observed, highlighting reporting as a gap where consensus on the part of Ontario laboratories is needed.
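
    The technical concordance assessment described above can be pictured as a set comparison of variant calls against the reference centre. The sketch below is a simplified illustration, not the OCTANE pipeline: the laboratory names, the two placeholder variants, and the reduction of each call to a (chromosome, position, ref, alt) tuple are all assumptions made for the example.

```python
from typing import Dict, Set, Tuple

# One call is represented as (chromosome, position, reference allele, alternate allele).
Variant = Tuple[str, int, str, str]

def technical_concordance(reference: Set[Variant], participant: Set[Variant]) -> float:
    """Fraction of the reference centre's exonic variants also identified by a participant."""
    if not reference:
        return 1.0
    return len(reference & participant) / len(reference)

# Hypothetical reference-centre calls and two hypothetical participating laboratories.
reference_calls: Set[Variant] = {("7", 55_249_071, "C", "T"), ("12", 25_398_284, "C", "A")}
lab_calls: Dict[str, Set[Variant]] = {
    "lab_A": {("7", 55_249_071, "C", "T"), ("12", 25_398_284, "C", "A")},
    "lab_B": {("7", 55_249_071, "C", "T")},
}

for lab, calls in lab_calls.items():
    print(f"{lab}: technical concordance = {technical_concordance(reference_calls, calls):.0%}")
```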